9 research outputs found

    Learning Visual Classifiers From Limited Labeled Images

    Recognizing humans and their activities from images and video is one of the key goals of computer vision. While supervised learning algorithms like Support Vector Machines and Boosting have offered robust solutions, they require large amounts of labeled data for good performance. It is often difficult to acquire large labeled datasets due to the significant human effort involved in data annotation. However, it is considerably easier to collect unlabeled data due to the availability of inexpensive cameras and large public databases like Flickr and YouTube. In this dissertation, we develop efficient machine learning techniques for visual classification from a small amount of labeled training data by utilizing the structure in the testing data, labeled data in a different domain, and unlabeled data. This dissertation has three main parts.

    In the first part of the dissertation, we consider how multiple noisy samples available during testing can be utilized to perform accurate visual classification. Such multiple samples are easily available in video-based recognition problems, which are commonly encountered in visual surveillance. Specifically, we study the problem of unconstrained human recognition from iris images. We develop a Sparse Representation-based selection and recognition scheme, which learns the underlying structure of clean images. This learned structure is utilized to develop a quality measure, and a quality-based fusion scheme is proposed to combine evidence of varying quality. Furthermore, we extend the method to incorporate privacy, an important requirement in practical biometric applications, without significantly affecting the recognition performance.

    In the second part, we analyze the problem of utilizing labeled data in a different domain to aid visual classification. We consider the problem of shifts in acquisition conditions between training and testing, which is very common in iris biometrics. In particular, we study the sensor mismatch problem, where the training samples are acquired using a sensor much older than the one used for testing. We provide one of the first solutions to this problem: a kernel learning framework to adapt iris data collected from one sensor to another. Extensive evaluations on iris data from multiple sensors demonstrate that the proposed method leads to considerable improvement in cross-sensor recognition accuracy. Furthermore, since the proposed technique requires minimal changes to the iris recognition pipeline, it can easily be incorporated into existing iris recognition systems.

    In the last part of the dissertation, we analyze how unlabeled data available during training can assist visual classification applications. Here, we consider still image-based vision applications involving humans, where explicit motion cues are not available. A human pose often conveys not only the configuration of the body parts, but also implicit predictive information about the ensuing motion. We propose a probabilistic framework to infer this dynamic information associated with a human pose, using unlabeled and unsegmented videos available during training. The inference problem is posed as a non-parametric density estimation problem on non-Euclidean manifolds. Since direct modeling is intractable, we develop a data-driven approach, estimating the density for the test sample under consideration. Statistical inference on the estimated density provides us with quantities of interest, such as the most probable future motion of the human and the amount of motion information conveyed by the pose.
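    As a concrete illustration of the sparse-representation idea in the first part, the sketch below (Python/NumPy) classifies a probe by its class-wise reconstruction residual over a dictionary of training samples and reports a sparsity-concentration quality score (the SCI from Wright et al.'s SRC). The greedy OMP solver, the dictionary layout, and the choice of quality score are illustrative assumptions, not the dissertation's exact selection and fusion scheme.

import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: pick k atoms of D that best explain y."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def src_classify(D, labels, y, k=5):
    """Classify y by class-wise reconstruction residual and return a
    sparsity-concentration quality score in [0, 1] (an assumed measure)."""
    y = y / (np.linalg.norm(y) + 1e-12)
    x = omp(D, y, k)
    classes = np.unique(labels)
    resid = np.array([np.linalg.norm(y - D @ np.where(labels == c, x, 0.0))
                      for c in classes])
    l1 = np.abs(x).sum() + 1e-12
    sci = (len(classes) * max(np.abs(x[labels == c]).sum() for c in classes) / l1 - 1) \
          / (len(classes) - 1)
    return classes[np.argmin(resid)], sci

# Toy usage: 3 classes, 10 training images each, 64-dimensional features.
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(3), 10)
D = rng.normal(size=(64, 30))
D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary atoms
probe = D[:, 4] + 0.05 * rng.normal(size=64)    # noisy copy of a class-0 sample
print(src_classify(D, labels, probe))           # expected: class 0, quality near 1

    In a video-based setting, a per-frame score of this kind can be used to discard poor-quality frames or to weight their evidence before fusion.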

    Dictionary Learning from Ambiguously Labeled Data

    We propose a novel dictionary-based learning method for ambiguously labeled multiclass classification, where each training sample has multiple labels and only one of them is the correct label. The dictionary learning problem is solved using an iterative alternating algorithm. At each iteration of the algorithm, two alternating steps are performed: a confidence update and a dictionary update. The confidence of each sample is defined as the probability distribution on its ambiguous labels. The dictionaries are updated using either soft (EM-based) or hard decision rules. Extensive evaluations on existing datasets demonstrate that the proposed method performs significantly better than state-of-the-art ambiguously labeled learning approaches.
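    The sketch below (Python/NumPy) is a minimal version of this alternating loop: per-class dictionaries are modeled as confidence-weighted low-rank bases (a stand-in for the paper's dictionary update), and confidences are re-estimated from class-wise reconstruction residuals under an assumed Gaussian noise model. The soft flag switches between the EM-style and hard decision rules mentioned in the abstract.

import numpy as np

def learn_ambiguous(X, candidates, n_atoms=5, sigma=1.0, iters=10, soft=True):
    """X: (d, n) samples; candidates: (n, C) boolean mask of each sample's ambiguous labels."""
    n_classes = candidates.shape[1]
    conf = candidates / candidates.sum(1, keepdims=True)      # start uniform over candidates
    for _ in range(iters):
        # Dictionary update: confidence-weighted PCA basis per class.
        dicts = [np.linalg.svd(X * np.sqrt(conf[:, c]), full_matrices=False)[0][:, :n_atoms]
                 for c in range(n_classes)]
        # Confidence update: residual of projecting each sample onto each class basis.
        resid = np.stack([np.linalg.norm(X - D @ (D.T @ X), axis=0) for D in dicts], axis=1)
        p = (np.exp(-resid ** 2 / (2 * sigma ** 2)) + 1e-12) * candidates
        p /= p.sum(1, keepdims=True)
        conf = p if soft else np.eye(n_classes)[p.argmax(1)]   # soft (EM) vs hard assignment
    return dicts, conf

    With soft=True, every candidate label keeps a fractional weight in the next dictionary update; with soft=False, each sample commits to its currently most likely candidate, mirroring the soft/hard decision rules described above.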

    Chromatic exclusivity hypothesis and the physical basis of floral color

    This paper presents the results of floral spectral studies on 1275 flowers from India, Brazil, Israel, Germany, and Norway. Floral spectral reflectance from 400 to 700 nm was used to quantitatively represent the 'human-perceived' color of flowers in the Red, Green, Blue (RGB) color space. Floral spectral reflectance from 350 to 600 nm was used to discern and objectively represent 'insect pollinator-perceived' flower colors in the color hexagon. We leverage the advantage offered by the 'quantified human perception' provided by 'human-perceived' floral colors to represent the distribution of floral hues and uncover the relationship between the composition of incoming solar radiation and the predominant 'human-perceived' floral colors at the tropics and the higher latitudes. Further, the observed species-level mutual exclusivity of 'insect pollinator-perceived' floral colors is stated as the chromatic exclusivity hypothesis. We compare 'human-perceived' and 'insect pollinator-perceived' floral colors at Trivandrum (India) and provide a physical explanation for the short and long 'wavelength triads' of insect pollinator and human visual sensitivity, respectively.
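    The sketch below (Python/NumPy) shows, under placeholder assumptions, how a measured reflectance spectrum can be turned into the two representations used here: crude band-averaged RGB values over 400-700 nm for the 'human-perceived' color, and Chittka (1992) color-hexagon coordinates from UV/blue/green quantum catches for the 'insect pollinator-perceived' color. The Gaussian receptor sensitivities, flat illuminant, and leaf-green background are stand-ins for the paper's measured data.

import numpy as np

wl = np.arange(300, 701)                        # wavelength grid, nm

def band_rgb(reflectance):
    """Crude (R, G, B): mean reflectance in the 600-700 / 500-600 / 400-500 nm bands."""
    return tuple(float(reflectance[(wl >= lo) & (wl < hi)].mean())
                 for lo, hi in [(600, 700), (500, 600), (400, 500)])

def gauss(peak, width=40):
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

RECEPTORS = {"UV": gauss(350), "B": gauss(440), "G": gauss(540)}    # placeholder sensitivities
ILLUMINANT = np.ones_like(wl, dtype=float)                          # flat daylight stand-in

def hexagon_coords(reflectance, background):
    """Chittka (1992) hexagon: E = P / (P + 1), with catches scaled to the background."""
    E = {}
    for name, S in RECEPTORS.items():
        P = np.trapz(S * ILLUMINANT * reflectance, wl)
        P_bg = np.trapz(S * ILLUMINANT * background, wl)
        p = P / P_bg                            # von Kries-style adaptation to the background
        E[name] = p / (p + 1.0)
    x = (np.sqrt(3) / 2) * (E["G"] - E["UV"])
    y = E["B"] - 0.5 * (E["UV"] + E["G"])
    return x, y

# Toy usage: a "yellow" flower (reflecting above ~520 nm) against a leaf-green background.
flower = 1.0 / (1.0 + np.exp(-(wl - 520) / 10.0))
leaf = 0.1 + 0.3 * gauss(550)
print(band_rgb(flower), hexagon_coords(flower, leaf))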

    Quantitative representation of floral colors

    Human and insect pollinator perceived floral colors of 81 species of angiosperms (flowering plants) from Trivandrum (Kerala, India) were represented using the CIE 1976 L*a*b* color space and the color hexagon, respectively. The floral color difference among human-perceived red, yellow, and blue-hued flowers, and that of each flower from its respective pure hue, was calculated using the CIE ΔE 2000 (CIEDE2000) formula. Human-perceived floral color difference values were consistently higher than 3.5, indicating the uniqueness of floral colors. Flowers perceived as red and yellow by humans were dominant and of comparable proportions. Insect pollinators perceive most of the flowers as blue-green. Quantitative representation of human and pollinator perceived floral colors would be invaluable for understanding the information broadcast by flowers. It can form the basis of flower grading in the floriculture industry and underpin objectivity in developing frameworks for national pollinator strategies.
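    As a sketch of the color-difference step, the snippet below computes CIEDE2000 distances between a flower's L*a*b* color and a nominal pure hue using scikit-image. The sRGB-derived pure-hue references, the example measurement, and the reading of 3.5 as a perceptibility threshold are illustrative assumptions rather than the paper's protocol.

import numpy as np
from skimage.color import deltaE_ciede2000, rgb2lab

# Pure-hue references derived from sRGB primaries (an assumed stand-in for the
# paper's definition of a pure hue).
PURE = {name: rgb2lab(np.array([[rgb]], dtype=float))[0, 0]
        for name, rgb in {"red": (1, 0, 0), "yellow": (1, 1, 0), "blue": (0, 0, 1)}.items()}

def hue_difference(flower_lab, hue):
    """CIE DeltaE 2000 between a flower's L*a*b* color and its nominal pure hue."""
    return float(deltaE_ciede2000(np.asarray(flower_lab, dtype=float), PURE[hue]))

# Toy usage: a hypothetical red-hued flower measured at L* = 45, a* = 60, b* = 35.
dE = hue_difference([45.0, 60.0, 35.0], "red")
print(dE, "distinct from pure red" if dE > 3.5 else "close to pure red")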